Human-level AI
Human-level AI is not inevitable. We have the power to change course
Garrison Lovely
"Technology happens because it is possible," OpenAI CEO Sam Altman told the New York Times in 2019, consciously paraphrasing Robert Oppenheimer, the father of the atomic bomb. Another widespread techie conviction is that the first human-level AI – also known as artificial general intelligence (AGI) – will lead to one of two futures: a post-scarcity techno-utopia or the annihilation of humanity. For countless other species, the arrival of humans spelled doom. We weren't tougher, faster or stronger – just smarter and better coordinated. In many cases, extinction was an accidental byproduct of some other goal we had.
Superalignment with Dynamic Human Values
Florian Mai, David Kaczér, Nicholas Kluge Corrêa, Lucie Flek
Two core challenges of alignment are 1) scalable oversight and 2) accounting for the dynamic nature of human values. While solutions like recursive reward modeling address 1), they do not simultaneously account for 2). We sketch a roadmap for a novel algorithmic framework that trains a superhuman reasoning model to decompose complex tasks into subtasks that are still amenable to human-level guidance. Our approach relies on what we call the part-to-complete generalization hypothesis, which states that the alignment of subtask solutions generalizes to the alignment of complete solutions. We advocate for the need to measure this generalization and propose ways to improve it in the future.
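The decompose-and-oversee idea in the abstract can be illustrated with a toy sketch. All names and the splitting rule here are hypothetical (the paper proposes a roadmap, not an implementation): a complex task is recursively split into subtasks small enough for a human-level overseer to judge, and under the part-to-complete generalization hypothesis, approval of every subtask solution is taken as evidence that the composed complete solution is aligned.

```python
# Toy sketch of recursive task decomposition with human-level oversight.
# All function names and the "alignment criterion" are illustrative stand-ins.

def decompose(task: str) -> list[str]:
    """Hypothetical model call: split a task until pieces are small."""
    if len(task) <= 8:
        return [task]              # atomic: small enough to oversee directly
    mid = len(task) // 2
    return [task[:mid], task[mid:]]

def solve(subtask: str) -> str:
    """Stand-in for the model's solution to an atomic subtask."""
    return subtask.upper()

def overseer_approves(subtask: str, solution: str) -> bool:
    """Stand-in for human-level guidance, feasible only on small subtasks."""
    return solution == subtask.upper()     # toy alignment criterion

def solve_with_oversight(task: str) -> str:
    """Solve recursively; every atomic piece must pass oversight.

    Part-to-complete hypothesis: if each approved subtask solution is
    aligned, treat the composed complete solution as aligned too.
    """
    parts = decompose(task)
    if parts == [task]:                    # base case: atomic subtask
        sol = solve(task)
        if not overseer_approves(task, sol):
            raise ValueError(f"oversight rejected subtask: {task!r}")
        return sol
    return "".join(solve_with_oversight(p) for p in parts)

print(solve_with_oversight("summarize this report"))  # → SUMMARIZE THIS REPORT
```

The sketch makes the open question concrete: the overseer only ever checks small pieces, so whether the composed result inherits their alignment is exactly the generalization the authors argue must be measured.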
An AI Pause Is Humanity's Best Bet For Preventing Extinction
The existential risks posed by artificial intelligence (AI) are now widely recognized. After hundreds of industry and science leaders warned that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the U.N. Secretary-General recently echoed their concern. So did the prime minister of the U.K., who is also investing 100 million pounds into AI safety research that is mostly meant to prevent existential risk. Other leaders are likely to follow in recognizing AI's ultimate threat. In the scientific field of existential risk, which studies the most likely causes of human extinction, AI is consistently ranked at the top of the list.
AI Timelines: What Do Experts in Artificial Intelligence Expect for the Future?
Artificial intelligence that surpasses our own intelligence sounds like the stuff of science fiction books or films. What do experts in the field of AI research think about such scenarios? Do they dismiss these ideas as fantasy, or are they taking such prospects seriously? A human-level AI would be a machine, or a network of machines, capable of carrying out the same range of tasks that we humans are capable of. It would be a machine that is "able to learn to do anything that a human can do," as Norvig and Russell put it in their textbook on AI. It would be able to choose actions that allow the machine to achieve its goals and then carry out those actions.
Benefits & Risks of Artificial Intelligence - Future of Life Institute
"Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before - as long as we manage to keep the technology beneficial." From Siri to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google's search algorithms to IBM's Watson to autonomous weapons. Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g. facial recognition, internet searches, or driving a car). However, the long-term goal of many researchers is to create general AI (AGI or strong AI).
Human-level AI is a giant risk. Why are we entrusting its development to tech CEOs?
Technology companies are racing to develop human-level artificial intelligence, whose development poses one of the greatest risks to humanity. Last week, John Carmack, a software engineer and video game developer, announced that he has raised $20 million to start Keen Technologies, a company devoted to building fully human-level AI. He is not the only one. There are currently 72 projects around the world focused on developing a human-level AI, also known as an AGI: an AI that can do any cognitive task at least as well as humans can. Many have raised concerns about the effects that even today's use of artificial intelligence, which is far from human-level, already has on our society.
Studying the brain to build AI that processes language as people do
AI has made impressive strides in recent years, but it's still far from learning language as efficiently as humans. For instance, children learn that "orange" can refer to both a fruit and a color from a few examples, but modern AI systems can't do this nearly as efficiently as people. This has led many researchers to wonder: Can studying the human brain help to build AI systems that can learn and reason like people do? Today, Meta AI is announcing a long-term research initiative to better understand how the human brain processes language. In collaboration with the neuroimaging center Neurospin (CEA) and INRIA, we're comparing how AI language models and the brain respond to the same spoken or written sentences.
Meta's new long-term AI study sounds a lot like OpenAI's current dead-end
Meta recently announced a long-term research partnership to study the human brain. According to the company, it intends to use the results of this study to "guide the development of AI that processes speech and text as efficiently as people." This is the latest in Meta's ongoing quest to perform the machine learning equivalent of alchemy: producing thought from language. The big idea: Meta wants to understand exactly what's going on in people's brains when they process language. Then, somehow, it's going to use this data to develop an AI capable of understanding language. According to Meta AI, the company spent the past two years developing an AI system to process datasets of brainwave information in order to glean insights into how the brain handles communication.
Meta's Yann LeCun strives for human-level AI
What is the next step toward bridging the gap between natural and artificial intelligence? Scientists and researchers are divided on the answer. Yann LeCun, Chief AI Scientist at Meta and the recipient of the 2018 Turing Award, is betting on self-supervised learning: machine learning models that can be trained without human-labeled examples. LeCun has been thinking and talking about self-supervised and unsupervised learning for years.